Current Issue: October - December 2017, Issue Number: 4, Articles: 5
This paper analyses the competitive approach to the co-evolutionary training of multi-layer perceptron classifiers. Two algorithms were tested: the first opposes a population of classifiers to a population of training patterns; the second pits a population of classifiers against a population of subsets of training patterns. The classifiers are regarded as predators that need to 'capture' (correctly categorise) the prey (training patterns). Success for the predators is measured by their ability to capture prey; success for the prey is measured by their ability to escape predation (be misclassified). The aim of the procedure is to create an evolutionary tug-of-war between the best classifiers and the most difficult data samples, increasing the efficiency and accuracy of the learning process. The two co-evolutionary algorithms were tested on a number of well-known benchmarks and on several artificial data sets modelling different kinds of common classification problems, such as overlapping data categories, noisy training inputs, and unbalanced data classes. The performance of the co-evolutionary methods was compared with that of two traditional training techniques: the standard backpropagation rule and a conventional evolutionary algorithm. The co-evolutionary procedures achieved top accuracy in all classification problems. They particularly excelled on data sets containing noisy training inputs, where they outperformed the backpropagation rule, and on tasks involving unbalanced data classes, where they outperformed both backpropagation and the conventional evolutionary algorithm. Compared to the standard evolutionary algorithm, the co-evolutionary procedures obtained similar or superior learning accuracies whilst needing considerably fewer presentations of the training patterns. This economy in the use of training patterns translated into significant savings in computational overhead and algorithm running time.
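The predator-prey fitness coupling described in this abstract can be sketched as follows. This is a minimal illustration of the general idea, assuming simple fraction-based fitness definitions and toy 1-D threshold classifiers; it is not the paper's exact algorithm.

```python
# Sketch of competitive co-evolution: classifiers are "predators" rewarded for
# capturing (correctly classifying) patterns; patterns are "prey" rewarded for
# escaping (being misclassified). Fitness definitions here are assumptions.

def predator_fitness(classifier, prey_population):
    # Fraction of patterns the classifier captures (classifies correctly).
    captured = sum(1 for x, y in prey_population if classifier(x) == y)
    return captured / len(prey_population)

def prey_fitness(pattern, predator_population):
    # Fraction of classifiers the pattern escapes (is misclassified by).
    x, y = pattern
    escapes = sum(1 for clf in predator_population if clf(x) != y)
    return escapes / len(predator_population)

# Toy demonstration: 1-D data, threshold classifiers predicting class int(x > t).
data = [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
classifiers = [lambda x, t=t: int(x > t) for t in (0.2, 0.5, 0.8)]

best = max(classifiers, key=lambda c: predator_fitness(c, data))
hardest = max(data, key=lambda p: prey_fitness(p, classifiers))
```

In a full co-evolutionary loop, both populations would be selected and varied on these opposing fitness scores, so the surviving classifiers face the hardest surviving patterns each generation.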
The paper presents the use of a self-organizing feature map (SOFM) for determining damage in reinforced concrete frames with shear walls. For this purpose, a concrete frame with a shear wall was subjected to nonlinear dynamic analysis. The SOFM was optimized using a genetic algorithm (GA) in order to determine the number of layers, the number of nodes in the hidden layer, the transfer function type, and the learning algorithm. The obtained model was compared with linear regression (LR) and nonlinear regression (NonLR) models, as well as with a radial basis function (RBF) neural network. It was concluded that the SOFM, when optimized with the GA, offers greater strength, flexibility, and accuracy.
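The GA-driven hyperparameter search mentioned in this abstract can be sketched generically. The genome encoding, candidate values, and stand-in fitness function below are illustrative assumptions, not the paper's setup (where fitness would come from training the SOFM and measuring its accuracy).

```python
# Sketch: a genetic algorithm searching network hyperparameters
# (hidden-layer size, transfer function, learning algorithm).
import random

TRANSFER = ["tanh", "logsig", "purelin"]   # hypothetical candidate sets
TRAINER = ["sgd", "momentum", "rprop"]

def random_genome():
    return {"hidden": random.randint(2, 32),
            "transfer": random.choice(TRANSFER),
            "trainer": random.choice(TRAINER)}

def fitness(g):
    # Stand-in for "train the network, return validation score" (max score 10):
    # this toy fitness prefers a mid-sized hidden layer, tanh, and rprop.
    score = -abs(g["hidden"] - 12)
    score += 5 * (g["transfer"] == "tanh") + 5 * (g["trainer"] == "rprop")
    return score

def mutate(g):
    # Resample one randomly chosen hyperparameter.
    child = dict(g)
    key = random.choice(["hidden", "transfer", "trainer"])
    child[key] = random_genome()[key]
    return child

def evolve(pop_size=20, generations=30):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]        # truncation selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)
```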
This study investigates predicting the pullout capacity of small ground anchors using nonlinear computing techniques. An input-output prediction model based on the nonlinear Hammerstein-Wiener (NHW) approach and a delay-input adaptive neuro-fuzzy inference system (DANFIS) are developed and used to predict the pullout capacity. The results of the developed models are compared with previous studies that used artificial neural networks and least-squares support vector machine techniques for the same case study. In situ data collection and statistical performance measures are used to evaluate the models' performance. Results show that the developed models improve the precision of predicting the pullout capacity compared with previous studies. Moreover, the DANFIS model is shown to perform better than the other models in predicting the pullout capacity of ground anchors.
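The Hammerstein-Wiener structure mentioned above sandwiches a linear dynamic block between two static nonlinearities. The following sketch shows that generic structure with an FIR filter as the linear block; the specific nonlinearities and weights are illustrative assumptions, not the fitted model from the study.

```python
# Sketch of a Hammerstein-Wiener model:
#   u -> static input nonlinearity -> linear dynamics (FIR) -> static output
#   nonlinearity -> y
import numpy as np

def hammerstein_wiener(u, fir, f_in=np.tanh, f_out=lambda z: z ** 2):
    # 1) static input nonlinearity, applied sample-wise
    v = f_in(np.asarray(u, dtype=float))
    # 2) linear dynamics: FIR filter y[k] = sum_i fir[i] * v[k - i]
    w = np.convolve(v, fir)[: len(v)]
    # 3) static output nonlinearity
    return f_out(w)

u = np.array([0.0, 1.0, 2.0, 1.0, 0.0])   # hypothetical input signal
y = hammerstein_wiener(u, fir=np.array([0.5, 0.3]))
```

In system identification, the two nonlinearities and the linear block would be fitted jointly to measured input-output data (here, anchor load tests).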
Machine learning, one of today's most rapidly growing interdisciplinary fields, promises an unprecedented perspective for solving intricate quantum many-body problems. Understanding the physical aspects of the representative artificial neural-network states has recently become highly desirable in the applications of machine-learning techniques to quantum many-body physics. In this paper, we explore the data structures that encode the physical features in the network states by studying their quantum entanglement properties, with a focus on the restricted-Boltzmann-machine (RBM) architecture. We prove that the entanglement entropy of all short-range RBM states satisfies an area law for arbitrary dimensions and bipartition geometries. For long-range RBM states, we show by an exact construction that such states can exhibit volume-law entanglement, implying a notable capability of the RBM in representing quantum states with massive entanglement. Strikingly, the neural-network representation for these states is remarkably efficient, in the sense that the number of nonzero parameters scales only linearly with the system size. We further examine the entanglement properties of generic RBM states by randomly sampling the weight parameters of the RBM. We find that their averaged entanglement entropy obeys volume-law scaling while at the same time strongly deviating from the Page entropy of completely random pure states. We show that their entanglement spectrum has no universal part associated with random matrix theory and bears Poisson-type level statistics. Using reinforcement learning, we demonstrate that the RBM is capable of finding the ground state (with power-law entanglement) of a model Hamiltonian with long-range interaction. In addition, we show, through a concrete example of the one-dimensional symmetry-protected topological cluster states, that the RBM representation may also be used as a tool to analytically compute the entanglement spectrum.

Our results uncover the unparalleled power of artificial neural networks in representing quantum many-body states regardless of how much entanglement they possess, which paves a novel way to bridge computer-science-based machine-learning techniques to outstanding quantum condensed-matter physics problems.
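The RBM ansatz referred to throughout this abstract represents a quantum state by a wavefunction whose hidden units can be traced out analytically, which is why the parameter count scales linearly with system size. The sketch below evaluates the standard RBM amplitude for one spin configuration; the parameter values are random illustrative assumptions, not a state from the paper.

```python
# Sketch: amplitude of a standard RBM quantum state with visible spins
# s_i in {-1, +1} and hidden units h_j in {-1, +1}. Summing out the hidden
# units gives
#   psi(s) = exp(a . s) * prod_j 2 * cosh(b_j + sum_i s_i W_ij)
import numpy as np

def rbm_amplitude(s, a, b, W):
    theta = b + s @ W                       # effective field on each hidden unit
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(theta))

rng = np.random.default_rng(0)
n_visible, n_hidden = 4, 8                  # parameter count is linear in size:
a = rng.normal(scale=0.1, size=n_visible)   #   n_visible biases
b = rng.normal(scale=0.1, size=n_hidden)    # + n_hidden biases
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))  # + n_visible*n_hidden weights

s = np.array([1, -1, 1, -1])                # one spin configuration
amp = rbm_amplitude(s, a, b, W)
```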
The concept of soft sets was initiated by Molodtsov. Then, some operations on soft sets were defined by Maji et al. Later on, the concept of soft topological spaces was introduced. In this paper, we introduce the concept of the pointwise topology of soft topological spaces. Finally, we investigate the properties of soft mapping spaces and the relationships between some soft mapping spaces.
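For readers unfamiliar with the notion, Molodtsov's definition can be stated as follows (a standard formulation, not quoted from this paper):

```latex
Let $U$ be an initial universe and $E$ a set of parameters, with
$A \subseteq E$. A pair $(F, A)$ is called a \emph{soft set} over $U$ if
$F$ is a mapping
\[
  F \colon A \to \mathcal{P}(U),
\]
assigning to each parameter $e \in A$ a subset $F(e) \subseteq U$,
regarded as the set of $e$-approximate elements of the soft set.
```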